
Search results for: "Suresh Venkatasubramanian"


9 mentions found


AI is not ready for primetime
  2024-03-10 | by Samantha Murphy Kelly | edition.cnn.com | time to read: +7 min
Generative AI tools, including ChatGPT, have been alleged to violate copyright. That’s not stopping Big Tech companies and AI firms, which continue to hook consumers and businesses on new features and capabilities. “Access to major generative AI systems in widespread use is controlled by a few companies,” said Venkatasubramanian, noting that these systems easily make errors and can produce damaging content. He believes bolder reforms may be necessary too, such as taxing AI companies to fund social safety nets. For now, current-day generative AI users must understand the limitations and challenges of using products that are still quite far from where they need to be.
While a number of AI systems have been found to discriminate, tipping the scales in favor of certain races, genders or incomes, there’s scant government oversight. Those bills, along with the over 400 AI-related bills being debated this year, were largely aimed at regulating smaller slices of AI. The use of AI to make consequential decisions — what the bills call “automated decision tools” — is pervasive but largely hidden. The AI was trained to assess new resumes by learning from past resumes — largely male applicants. Requirements to routinely test an AI system aren’t in most of the legislative proposals, nearly all of which still have a long road ahead.
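The resume-screening failure described above (a model trained on historical resumes that were largely from male applicants) can be illustrated with a toy sketch. The data and scoring rule below are invented for illustration only; they are not the actual system's code.

```python
# Toy sketch: a screener that scores resume words by how often they
# appeared in historically hired vs. rejected resumes. With skewed
# history, the model absorbs the skew rather than learning merit.
from collections import Counter

past_hired = ["chess club captain", "football team", "robotics lead"]
past_rejected = ["women's chess club captain", "women's soccer team"]

def word_scores(hired, rejected):
    """Score each word by (count in hired) - (count in rejected)."""
    h = Counter(" ".join(hired).split())
    r = Counter(" ".join(rejected).split())
    return {w: h[w] - r[w] for w in set(h) | set(r)}

scores = word_scores(past_hired, past_rejected)
# "women's" appears only in historically rejected resumes, so it
# receives a negative score -- the bias in the data becomes a rule.
print(scores["women's"])  # -2
```

Routine testing of the kind the bills fail to require would mean probing exactly these learned scores for protected-attribute proxies before deployment.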
Seemingly overnight, the user-friendly generative AI technology enraptured the globe. It also promised to revolutionize the future of white-collar work — so long as it didn’t cause an AI apocalypse in the process. ‘The world woke up to the AI revolution.’ And one year since ChatGPT’s public release, the fervor around AI is still at a fever pitch. And AI’s long-prophesied impacts on the labor market are also beginning to emerge, both inside and outside the tech industry. “Many, many, many jobs that are currently done by humans, AI will be able to do,” said Clune, the AI researcher at the University of British Columbia.
Such recruitment-based adoptions are the most difficult to carry out, social workers say. Gonzaga, who worked with his wife Heather Setrakian at eharmony and then on the Family-Match algorithm, referred questions to Ramirez. Social workers say Family-Match works like this: Adults seeking to adopt submit survey responses via the algorithm’s online platform, and foster parents or social workers input each child’s information. Adoption-Share is part of a small cadre of organizations that say their algorithms can help social workers place children with foster or adoptive families. “It’s wasted time for social workers and wasted emotional experiences for children.”___Contact AP’s global investigative team at Investigative@ap.org or https://www.ap.org/tips/
AI has been a source of deep personal interest for Biden, with its potential to affect the economy and national security. Using the Defense Production Act, the order will require leading AI developers to share safety test results and other information with the government. The National Institute of Standards and Technology is to create standards to ensure AI tools are safe and secure before public release. The official briefed reporters on condition of anonymity, as required by the White House. “He was as impressed and alarmed as anyone,” deputy White House chief of staff Bruce Reed said in an interview.
But as more people turn to this buzzy technology for things like homework help, workplace research, or health inquiries, one of its biggest pitfalls is becoming increasingly apparent: AI models often just make things up. Researchers have come to refer to this tendency of AI models to spew inaccurate information as “hallucinations,” or even “confabulations,” as Meta’s AI chief said in a tweet. A number of high-profile hallucinations from AI tools have already made headlines. Cracking down on AI hallucinations, however, could limit AI tools’ ability to help people with more creative endeavors — such as users asking ChatGPT to write poetry or song lyrics. How to prevent or fix AI hallucinations is a “point of active research,” Venkatasubramanian said, but at present is very complicated.
OpenAI is taking up the mantle against AI "hallucinations," the company announced Wednesday, with a new method for training artificial intelligence models. To date, Microsoft has invested more than $13 billion in OpenAI, and the startup's value has reached roughly $29 billion. AI hallucinations occur when models like OpenAI's ChatGPT or Google's Bard fabricate information entirely, behaving as if they are spouting facts. OpenAI's potential new strategy for fighting the fabrications: train AI models to reward themselves for each individual, correct step of reasoning as they arrive at an answer, instead of just rewarding a correct final conclusion. OpenAI has released an accompanying dataset of 800,000 human labels it used to train the model mentioned in the research paper, Cobbe said.
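The step-by-step reward idea described above can be sketched in a few lines. This is a hypothetical toy comparison of the two reward schemes (often called outcome vs. process supervision), not OpenAI's actual training code; all function names here are illustrative.

```python
# Toy contrast between rewarding only the final answer and rewarding
# each individually correct reasoning step, as the article describes.

def outcome_reward(steps, final_correct):
    """Outcome supervision: one reward for the final conclusion only."""
    return 1.0 if final_correct else 0.0

def process_reward(steps):
    """Process supervision: average per-step reward, so a chain with a
    flawed step earns partial credit and the flaw is locatable."""
    if not steps:
        return 0.0
    return sum(1.0 for ok in steps if ok) / len(steps)

# A 4-step solution with one flawed step that still lands on the
# right answer by luck: outcome supervision can't tell the difference.
steps = [True, True, False, True]
print(outcome_reward(steps, final_correct=True))  # 1.0
print(process_reward(steps))                      # 0.75
```

The design intuition is that grading the reasoning chain, not just the conclusion, penalizes lucky guesses built on fabricated intermediate claims.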
The European Union is at the forefront of drafting new AI rules that could set the global benchmark for addressing the privacy and safety concerns that have arisen with the rapid advances in the generative AI technology behind OpenAI's ChatGPT. "If it's about protecting personal data, they apply data protection laws; if it's a threat to the safety of people, there are regulations that have not been specifically defined for AI, but they are still applicable." Data protection authorities in France and Spain also launched probes in April into OpenAI's compliance with privacy laws. 'THINKING CREATIVELY': French data regulator CNIL has started "thinking creatively" about how existing laws might apply to AI, according to Bertrand Pailhes, its technology lead. "We are looking at the full range of effects, although our focus remains on data protection and privacy," he told Reuters.
On Monday, researcher Geoffrey Hinton, known as "The Godfather of AI," said he'd left his post at Google, citing concerns over potential threats from AI development. Google CEO Sundar Pichai talked last month about AI's "black box" problem, where even its developers don't always understand how the technology actually works. Among the other concerns: AI systems, left unchecked, can spread disinformation, allow companies to hoard users' personal data without their knowledge, exhibit discriminatory bias or cede countless human jobs to machines. In the "Blueprint for an AI Bill of Rights," Venkatasubramanian helped lay out proposals for "ethical guardrails" that could safely govern and regulate the AI industry. With them in place, most people would barely notice the difference while using AI systems, he says.
Total: 9